hysop.backend.host.fortran.operator.fortran_fftw module

class hysop.backend.host.fortran.operator.fortran_fftw.FortranFFTWOperator(input_fields, output_fields, **kwds)[source]

Bases: FortranOperator

Base class for Fortran FFTW interface.

Defines HySoP-compatible field requirements and initializes the fftw2py interface.

Parameters:
  • input_fields (dict) – dictionary of field: topology pairs, with continuous Field objects as keys

  • output_fields (dict) – dictionary of field: topology pairs, with continuous Field objects as keys
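
For illustration, a minimal sketch of how a concrete subclass might be constructed; SomeFFTWSubclass, vorticity, velocity and topo are hypothetical placeholders, not names defined by this module:

>>> # Field -> topology descriptor mappings are passed at construction
>>> op = SomeFFTWSubclass(input_fields={vorticity: topo},
...                       output_fields={velocity: topo})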

discretize()[source]

By default, an operator discretizes all its variables. For each input continuous field that is also an output field, the input topology may differ from the output topology.

After this call, one can access self.input_discrete_fields and self.output_discrete_fields, which contain the input and output discretized fields mapped by their continuous fields.

self.discrete_fields will be a tuple containing all input and output discrete fields.

Discrete tensor fields are built back from the discretized scalar fields and are accessible from self.input_tensor_fields, self.output_tensor_fields and self.discrete_tensor_fields, like their scalar counterparts.
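
A hedged sketch of the attributes made available by this call (op and vorticity are hypothetical placeholders):

>>> op.discretize()
>>> dfield = op.input_discrete_fields[vorticity]  # discrete field, keyed by continuous field
>>> all_dfields = op.discrete_fields              # tuple of all input and output discrete fields
>>> tensors = op.input_tensor_fields              # discrete tensor fields, rebuilt from scalars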

finalize(clean_fftw_solver=False, **kwds)[source]

Cleanup this node (free memory from external solvers, …). By default this does nothing.

get_field_requirements()[source]

Called just after handle_method(), i.e. once self.method has been set. Field requirements are:

  1. required local and global transposition state, if any.

  2. required memory ordering (either C or Fortran)

The default is Backend.HOST, no minimum or maximum ghosts, MemoryOrdering.ANY, and no specific default transposition state for any input or output variable.
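
As a sketch of how a subclass could build on those defaults (only the super() call is taken from this page; the rest is commentary):

def get_field_requirements(self):
    requirements = super().get_field_requirements()
    # defaults carried by each field requirement, per the text above:
    #   backend = Backend.HOST, no min or max ghosts,
    #   memory ordering = MemoryOrdering.ANY,
    #   no specific default transposition state
    return requirements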

get_node_requirements()[source]

Called after get_field_requirements to get global operator requirements.

By default we enforce a unique:

  • transposition state

  • cartesian topology shape

  • memory order (either C or Fortran)

across all fields.

get_work_properties()[source]

Returns the extra memory requirements of this operator. This allows operators to request temporary buffers that will be shared between operators in a graph to reduce the memory footprint and the number of allocations.

Returned memory is only usable during operator call (ie. in self.apply). Temporary buffers may be shared between different operators as determined by the graph builder.

By default, if there are no input or output temporary fields, this returns no requests, meaning that this node requires no extra buffers.

If temporary fields are present, their memory requests are automatically computed and returned.
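
A hedged sketch of how a subclass override might add an extra request on top of the automatic ones; MemoryRequest and push_mem_request are assumptions about the surrounding HySoP API, not guaranteed by this page:

def get_work_properties(self):
    requests = super().get_work_properties()
    # hypothetical: request one temporary buffer, shared between
    # operators of the graph, usable only inside self.apply()
    requests.push_mem_request('fftw_buffer', MemoryRequest(...))
    return requests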

handle_topologies(input_topology_states, output_topology_states)[source]

Called just after all topologies have been set up by the graph builder. Once this has been called, an operator can retrieve its input and output topologies contained in self.input_fields and self.output_fields.

ComputationalGraph creates the topologies we need for the current operator by using TopologyDescriptors and FieldRequirements returned by get_field_requirements().

All topologies and all input and output topology states have to comply with the operator field requirements obtained from self.get_field_requirements(); otherwise a RuntimeError is raised, since this indicates an internal graph builder failure.

Notes

This function will take those generated topologies and build topology views on top of them to match the input and output field requirements.

Graph generated field topologies are available as values of self.input_fields and self.output_fields and are mapped by continuous Field.

In addition, input_topology_states and output_topology_states are passed as arguments and contain the input and output discrete topology states determined by the graph builder.

Once this has been checked, topologies are replaced with TopologyView instances (which are Topologies associated with a TopologyState). A TopologyView is basically a read-only view on top of a Topology which is altered by a TopologyState.

A TopologyState is a topology dependent state, and acts as a virtual state that determines how we should perceive raw mesh data.

It may for example include a transposition state for CartesianTopology topologies, resulting in the automatic permutation of attributes when they are fetched (global_resolution and ghosts will be transposed). It may also contain MemoryOrdering information that tells us to see the data as either C_CONTIGUOUS or F_CONTIGUOUS.

Thus the values of self.input_fields and self.output_fields will contain topology-state-dependent topologies after this call. This allows operator implementations to be state agnostic, and thus much simpler to implement without errors.
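
A hedged sketch of the state-dependent perception described above (field is a hypothetical placeholder):

>>> topo_view = self.input_fields[field]  # a TopologyView after this call
>>> topo_view.global_resolution           # permuted according to the transposition state
>>> topo_view.ghosts                      # ghosts are permuted consistently
>>> # the raw underlying Topology itself is never modified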

RedistributeInter may not have both input and output topology states. In that specific case, the (in-|out-)topology can be None.

setup(work=None)[source]

Set up the temporary buffers that have been requested in get_work_properties(). This function may be used to execute post-allocation routines. This sets the self.ready flag to True. Once this flag is set, one may call ComputationalGraphNode.apply() and ComputationalGraphNode.finalize().

Automatically honours temporary field memory requests.
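
A hedged sketch of the call order around setup(); op is a hypothetical operator instance and allocate_work stands in for the graph builder's allocation step, which is not part of this module:

>>> props = op.get_work_properties()     # declare temporary buffer needs
>>> work = allocate_work(props)          # hypothetical allocation of shared buffers
>>> op.setup(work=work)                  # post-allocation setup, sets op.ready to True
>>> op.apply()                           # legal once op.ready is True
>>> op.finalize(clean_fftw_solver=True)  # free fftw2py solver resources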

classmethod supports_mpi()[source]

Return True if this operator was implemented to support multiple MPI processes.